Sentinel-1 RTC Gamma0 data

Index¶

  • Overview
  • Setup (imports, defaults, dask, odc)
  • Example query
  • Product definition
  • Quality layer
  • Create and apply a good quality pixel mask
  • Plot and browse the data

Overview¶

This notebook demonstrates how to load and use Sentinel-1 Radiometric Terrain Corrected (RTC) Gamma0 data generated in EASI.

These analysis-ready S1 gamma0 backscatter data are processed from Sentinel-1 GRD scenes using the SNAP-10 Toolbox with Graph Processing Tool (GPT) XML recipes. See the RTC Gamma0 product variants section for further details.

For most uses we recommend the smoothed 20 m product (sentinel1_grd_gamma0_20m). We can process the 10 m products (sentinel1_grd_gamma0_10m, sentinel1_grd_gamma0_10m_unsmooth) and other variants on request.

Using Sentinel-1 backscatter data¶

An excellent introduction to and overview of using SAR data is provided in the CEOS Laymans SAR Interpretation Guide. This guide has also been converted to a set of Jupyter notebooks that you can download from https://github.com/AMA-Labs/cal-notebooks/tree/main/examples/SAR.

Synthetic Aperture Radar operates in the microwave range of the electromagnetic spectrum as an active pulse sent by the satellite and scattered by features on the Earth's surface. The return signal from the surface is measured at the satellite in terms of the signal intensity, phase and polarisation compared to the signal that was sent.

The SAR instrument on the Sentinel-1 satellites operates in the C-band at a wavelength of approximately 5.6 cm. This means it can "see" objects of about this size and larger, while smaller objects are relatively transparent. This makes Sentinel-1 particularly sensitive to tree canopies, sparse and low-biomass vegetation, and surface water (smooth or wind-affected).

The SAR signal responds to the orientation and scattering from surface features of comparable size or larger than the wavelength.

  • A bright backscatter value typically means the surface was orientated perpendicular to the signal incidence angle and most of the signal was reflected back to the satellite (direct backscatter)
  • A dark backscatter value means most of the signal was reflected away from the satellite (forward scattering) and typically responds to a smooth surface (relative to the wavelength) such as calm water or bare soil
  • Rough surfaces (relative to the wavelength) result in diffuse scattering where some of the signal is returned to the satellite.
  • Complex surfaces may result in volume scattering (scattering within a tree canopy) or double-bounce scattering (perpendicular objects such as buildings and structures)
  • The relative backscatter values of co-polarisation (VV) and cross-polarisation (VH) measurements can provide information on the scattering characteristics of the surface features.

Using Sentinel-1 backscatter data requires interpreting the response of different surface features, including how these features change in space or time. It may also be necessary to consider the incidence angle of the SAR signal relative to the surface features, using the incidence_angle band or the satellite pass-direction metadata (descending = north to south; ascending = south to north).
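As a rough local rule, descending and ascending overpasses fall at different times of day, so the time coordinate alone can separate them. A minimal sketch on a synthetic dataset standing in for the loaded data (the morning = descending pairing is an assumption for this region and must be verified against the pass-direction metadata):

```python
# Sketch: split acquisitions into morning and evening passes by acquisition hour.
# Assumption (verify per region): morning = descending, evening = ascending.
import numpy as np
import pandas as pd
import xarray as xr

times = pd.to_datetime([
    "2020-02-02T10:17:28", "2020-02-03T21:43:05",
    "2020-02-07T10:25:35", "2020-02-09T21:43:48",
])
ds = xr.Dataset(
    {"vv": ("time", np.random.rand(len(times)).astype("float32"))},
    coords={"time": times},
)

morning = ds.sel(time=ds.time.dt.hour < 12)   # e.g. descending passes
evening = ds.sel(time=ds.time.dt.hour >= 12)  # e.g. ascending passes
print(len(morning.time), len(evening.time))
```

The same `ds.time.dt.hour` selection works on a dataset returned by `dc.load()`.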

Units and conversions¶

The sentinel1_grd_gamma0_* data are given in Intensity (or backscatter power) units. Intensity can be converted to decibel (dB) or amplitude, and vice-versa, with the following equations. Practical Xarray examples are given below.

Intensity to/from dB:

       dB = 10 * log10(intensity) + K
intensity = 10^((dB-K)/10)

where K is a calibration factor, which for Sentinel-1 is 0 dB.

Intensity to/from Amplitude:

intensity = amplitude * amplitude
amplitude = sqrt(intensity)

Additional reference: https://forum.step.esa.int/t/what-stage-of-processing-requires-the-linear-to-from-db-command
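As a quick sanity check of the equations above (with K = 0 dB for Sentinel-1), a minimal numpy sketch:

```python
import numpy as np

K = 0  # Sentinel-1 calibration factor, in dB

def intensity_to_db(intensity):
    """dB = 10 * log10(intensity) + K"""
    return 10 * np.log10(intensity) + K

def db_to_intensity(db):
    """intensity = 10^((dB - K) / 10)"""
    return 10 ** ((db - K) / 10)

intensity = 0.1
db = intensity_to_db(intensity)   # -10.0 dB
amplitude = np.sqrt(intensity)    # intensity = amplitude * amplitude
print(db, db_to_intensity(db), amplitude**2)
```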

Set up¶

Imports¶

In [1]:
# Common imports and settings
import os, sys, re
from pathlib import Path
from IPython.display import Markdown
import pandas as pd
pd.set_option("display.max_rows", None)
import xarray as xr
import numpy as np

# Datacube
import datacube
from datacube.utils.aws import configure_s3_access
import odc.geo.xr                             # https://github.com/opendatacube/odc-geo
from datacube.utils import masking            # https://github.com/opendatacube/datacube-core/blob/develop/datacube/utils/masking.py
from dea_tools.plotting import display_map    # https://github.com/GeoscienceAustralia/dea-notebooks/tree/develop/Tools

# Basic plots
%matplotlib inline
# import matplotlib.pyplot as plt
# plt.rcParams['figure.figsize'] = [12, 8]

# Holoviews
# https://holoviz.org/tutorial/Composing_Plots.html
# https://holoviews.org/user_guide/Composing_Elements.html
import hvplot.xarray
import panel as pn
In [2]:
# EASI defaults
# These are convenience functions so that the notebooks in this repository work in all EASI deployments

# The `git.Repo()` part returns the local directory that easi-notebooks has been cloned into
# If using the `easi-tools` functions from another path, replace `repo` with your local path to `easi-notebooks` directory
try:
    import git
    repo = git.Repo('.', search_parent_directories=True).working_tree_dir    # Path to this cloned local directory
except (ImportError, git.InvalidGitRepositoryError):
    repo = Path.home() / 'easi-notebooks'    # Reasonable default
    if not repo.is_dir():
        raise RuntimeError('To use `easi-tools` please provide the local path to `https://github.com/csiro-easi/easi-notebooks`')
if repo not in sys.path:
    sys.path.append(str(repo))    # Add the local path to `easi-notebooks` to python

from easi_tools import EasiDefaults
from easi_tools import initialize_dask, xarray_object_size, heading

EASI defaults¶

These default values are configured for each EASI instance. They help us to use the same training notebooks in each EASI instance. You may find some of the functions convenient for your work or you can easily override the values in your copy of this notebook.

In [3]:
easi = EasiDefaults()

family = 'sentinel-1'
product = easi.product(family)   # 'sentinel1_grd_gamma0_20m'
display(Markdown(f'Default {family} product for "{easi.name}": [{product}]({easi.explorer}/products/{product})'))
Successfully found configuration for deployment "asia"

Default sentinel-1 product for "asia": sentinel1_grd_gamma0_20m

Dask cluster¶

Using a local Dask cluster is a good habit to get into. It can simplify loading and processing of data in many cases, and it provides a dashboard that shows the loading/processing progress.

To learn more about Dask see the set of dask notebooks.

In [4]:
# Local cluster
cluster, client = initialize_dask(workers=4)
display(client)

# Or use Dask Gateway - this may take a few minutes
# cluster, client = initialize_dask(use_gateway=True, workers=4)
# display(client)
Successfully found configuration for deployment "asia"

Client: Client-99acfca2-b1c6-11ef-8729-0e10825650b1
Connection method: Cluster object; Cluster type: distributed.LocalCluster
Dashboard: https://hub.asia.easi-eo.solutions/user/pag064/proxy/8787/status
Workers: 4 | Total threads: 8 | Total memory: 24.00 GiB | Status: running

ODC database¶

Connect to the ODC database. Configure the environment and low-level tools to read from AWS buckets.

In [5]:
dc = datacube.Datacube()

# Access AWS "requester-pays" buckets
# This is necessary for reading data from most third-party AWS S3 buckets such as for Landsat and Sentinel-2
configure_s3_access(aws_unsigned=False, requester_pays=True, client=client);

Example query¶

Change any of the parameters in the query object below to adjust the location, time, projection, or spatial resolution of the returned datasets.

Use the Explorer interface to check the temporal and spatial coverage for each product.

In [6]:
# Explorer link
display(Markdown(f'See: {easi.explorer}/products/{product}'))

# EASI defaults
display(Markdown(f'#### Location: {easi.location}'))
latitude_range = easi.latitude
longitude_range = easi.longitude
time_range = easi.time

# Or set your own latitude / longitude
# Australia GWW
# latitude_range = (-33, -32.6)
# longitude_range = (120.5, 121)
# time_range = ('2020-01-01', '2020-01-31')

query = {
    'product': product,       # Product name
    'x': longitude_range,     # "x" axis bounds
    'y': latitude_range,      # "y" axis bounds
    'time': time_range,       # Any parsable date strings
}

# Convenience function to display the selected area of interest
display_map(longitude_range, latitude_range)

See: https://explorer.asia.easi-eo.solutions/products/sentinel1_grd_gamma0_20m

Location: Lake Tempe, Indonesia¶

Out[6]:
(Interactive map of the selected area of interest)

Load data¶

In [7]:
# Target xarray parameters
# - Select a set of measurements to load
# - output CRS and resolution
# - Usually we group input scenes on the same day to a single time layer (groupby)
# - Select a reasonable Dask chunk size (this should be adjusted depending on
#   the spatial and resolution parameters you choose)
load_params = {
    'group_by': 'solar_day',                        # Scene grouping
    'dask_chunks': {'latitude':2048, 'longitude':2048},      # Dask chunks
}

# Load data
data = dc.load(**(query | load_params))
display(xarray_object_size(data))
display(data)
'Dataset size: 772.50 MB'
<xarray.Dataset> Size: 810MB
Dimensions:      (time: 30, latitude: 1500, longitude: 1500)
Coordinates:
  * time         (time) datetime64[ns] 240B 2020-02-02T10:17:28.500000 ... 20...
  * latitude     (latitude) float64 12kB -3.9 -3.9 -3.901 ... -4.2 -4.2 -4.2
  * longitude    (longitude) float64 12kB 119.8 119.8 119.8 ... 120.1 120.1
    spatial_ref  int32 4B 4326
Data variables:
    vh           (time, latitude, longitude) float32 270MB dask.array<chunksize=(1, 1500, 1500), meta=np.ndarray>
    vv           (time, latitude, longitude) float32 270MB dask.array<chunksize=(1, 1500, 1500), meta=np.ndarray>
    angle        (time, latitude, longitude) float32 270MB dask.array<chunksize=(1, 1500, 1500), meta=np.ndarray>
Attributes:
    crs:           EPSG:4326
    grid_mapping:  spatial_ref
In [8]:
# When happy with the shape and size of chunks, persist() the result
data = data.persist()

Conversion and helper functions¶

In [9]:
# These functions use numpy, which should be satisfactory for most notebooks.
# Calculations for larger or more complex arrays may require Xarray's "ufunc" capability.
# https://docs.xarray.dev/en/stable/examples/apply_ufunc_vectorize_1d.html
#
# Apply numpy.log10 to the DataArray
# log10_data = xr.apply_ufunc(np.log10, data)

def intensity_to_db(da: 'xr.DataArray', K=0):
    """Return an array converted to dB values"""
    xx = da.where(da > 0, np.nan)  # Set values <= 0 to NaN
    xx = 10*np.log10(xx) + K
    xx.attrs.update({"units": "dB"})
    return xx

def db_to_intensity(da: 'xr.DataArray', K=0):
    """Return an array converted to intensity values"""
    xx = np.power(10, (da-K)/10.0)
    xx.attrs.update({"units": "intensity"})
    return xx

def select_valid_time_layers(ds: 'xarray', percent: float = 5):
    """Select time layers that have at least a given percentage of valid data (e.g., >=5%)

    Example usage:
      selected = select_valid_time_layers(ds, percent=5)
      filtered = ds.sel(time=selected)
    """
    spatial_dims = ds.odc.spatial_dims
    return ds.count(dim=spatial_dims).values / (ds.sizes[spatial_dims[0]]*ds.sizes[spatial_dims[1]]) >= (percent/100.0)

# Examples to check that the intensity to/from dB functions work as expected
# xx = data.vv.isel(time=0,latitude=np.arange(0, 5),longitude=np.arange(0, 5))
# xx[0] = 0
# xx[1] = -0.001
# display(xx.values)
# yy = intensity_to_db(xx)
# display(yy.values)
# zz = db_to_intensity(yy)
# display(zz.values)
In [10]:
# hvPlot convenience functions
def make_image(ds: 'xarray', frame_height=300, **kwargs):
    """Return a Holoviews DynamicMap (image) object that can be displayed or combined"""
    spatial_dims = ds.odc.spatial_dims
    defaults = dict(
        cmap="Greys_r",
        y = spatial_dims[0], x = spatial_dims[1],
        groupby = 'time',
        rasterize = True,
        geo = True,
        robust = True,
        frame_height = frame_height,
        clabel = ds.attrs.get('units', None),
    )
    defaults.update(**kwargs)
    return ds.hvplot.image(**defaults)

def rgb_image(ds: 'xarray', frame_height=300, **kwargs):
    """Return a Holoviews DynamicMap (RBG image) object that can be displayed or combined"""
    spatial_dims = ds.odc.spatial_dims
    defaults = dict(
        bands='band',
        y = spatial_dims[0], x = spatial_dims[1],
        groupby = 'time',
        rasterize = True,
        geo = True,
        robust = True,
        frame_height = frame_height,
    )
    defaults.update(**kwargs)
    return ds.hvplot.rgb(**defaults)
In [11]:
# Optional time layer filter

selected = select_valid_time_layers(data.vv, 10)  # Exclude time layers with less than 10% valid data
data = data.sel(time=selected).persist()
In [12]:
# Add db values to the dataset

data['vh_db'] = intensity_to_db(data.vh).persist()
data['vv_db'] = intensity_to_db(data.vv).persist()

Plot the data¶

Note the different data ranges for plotting (clim) between vv, vh, intensity and dB.

  • Stronger co-polarisation (VV) indicates direct backscatter, while stronger cross-polarisation (VH) may indicate a complex surface or volume scattering.
  • Intensity data are linear-scaled, so they tend to discriminate across a range of backscatter returns.
  • Decibel data are log-scaled, so they tend to discriminate between high and low backscatter returns.
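Rather than hard-coding clim values, one option is to derive robust limits from the data percentiles, ignoring NaN (nodata) pixels. A minimal sketch on a synthetic array standing in for data.vv (the 2–98% range is a common convention, not a product recommendation):

```python
# Sketch: derive robust colour limits (clim) from the data itself.
import numpy as np

# Synthetic backscatter-like values standing in for data.vv
vv = np.random.lognormal(mean=-3, sigma=1, size=10_000).astype("float32")
vv[:100] = np.nan  # simulate nodata pixels

# nanpercentile ignores the NaN pixels
clim = tuple(np.nanpercentile(vv, [2, 98]))
print(clim)
```

With a dask-backed DataArray, the equivalent would be computed from `data.vv.quantile([0.02, 0.98])` before plotting.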
In [13]:
# VV and VH (intensity and dB) and Angle hvPlots

vv_plot = make_image(data.vv, clim=(0, 0.5), title='VV')
vh_plot = make_image(data.vh, clim=(0, 0.1), title='VH')
ia_plot = make_image(data.angle, title='Incidence angle')

vv_db_plot = make_image(data.vv_db, clim=(-30, -3), title='VV')
vh_db_plot = make_image(data.vh_db, clim=(-30, -1), title='VH')
In [14]:
# Arrange plots with linked axes and time slider. Adjust browser window width if required.

layout = pn.panel(
    (vv_plot + vh_plot + ia_plot + vv_db_plot + vh_db_plot).cols(3),
    widget_location='top',
)
print(layout)  # Helpful to see how the hvplot is constructed
layout
Column
    [0] WidgetBox(align=('center', 'start'))
        [0] DiscreteSlider(name='time (seconds s..., options={'2020-02-02 10:17:28': nu...}, value=numpy.datetime64('2020-02-...)
    [1] HoloViews(Layout, widget_location='top')
Out[14]:

Plot a histogram of the dB data¶

A histogram can help separate water from land features. Here we show a histogram of the VH (dB) channel for all time layers.

  • If the histogram shows two clear peaks then a value between the peaks can reasonably be used as a water / land threshold
  • If not, try selected time layers, a different area of interest, or other channels or combinations.
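Once a threshold has been read off the histogram, the water mask is a simple comparison. A minimal sketch on a toy array standing in for data.vh_db (the -20 dB value is illustrative only; use the valley between the peaks for your own scene):

```python
# Sketch: threshold the VH (dB) channel to build a simple water mask.
import numpy as np

vh_db = np.array([[-25.0, -8.0],
                  [-22.0, -10.0]])  # stand-in for data.vh_db
threshold = -20.0                   # illustrative value only

water = vh_db < threshold  # True where likely open water
print(water)
```

The same comparison on the dataset, `water = data.vh_db < threshold`, yields a boolean DataArray that can be summed over the spatial dimensions to track water extent through time.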
In [15]:
# vals, bins, hist_plot = data.vh_db.plot.hist(bins=np.arange(-30, 0, 1), color='red')  # Matplotlib
hist_plot = data.vh_db.hvplot.hist(bins=np.arange(-30, 0, 1), color='red', title='Combined times', height=400)  # hvPlot

print(hist_plot)  # Helpful to see how the hvplot is constructed
hist_plot
:NdOverlay   [Variable]
   :Histogram   [vh_db]   (Count)
Out[15]:

Make an RGB image¶

A common strategy for creating an RGB colour composite image from two SAR channels is to use the ratio of the channels as the third channel. Here we choose

  • Red = VH ... complex scattering
  • Green = VV ... direct scattering
  • Blue = VH/VV ... relatively more complex than direct
In [16]:
# Add the vh/vv band to represent 'blue'
data['vh_vv'] = data.vh / data.vv

# Scale the measurements by their median so they have a similar range for visualization
spatial_dims = data.odc.spatial_dims
data['vh_scaled'] = data.vh / data.vh.median(dim=spatial_dims).persist()
data['vv_scaled'] = data.vv / data.vv.median(dim=spatial_dims).persist()
data['vh_vv_scaled'] = data.vh_vv / data.vh_vv.median(dim=spatial_dims).persist()

# odc-geo function
rgb_data = data.odc.to_rgba(bands=['vh_scaled','vv_scaled','vh_vv_scaled'], vmin=0, vmax=2)
In [17]:
# As subplots
# rgb_plot = rgb_image(
#     rgb_data,
# ).layout().cols(4)

# As movie. Select "loop" and use "-" button to adjust the speed to allow for rendering. After a few cycles the images should play reasonably well.
rgb_plot = rgb_image(
    rgb_data,
    precompute = True,
    widget_type='scrubber', widget_location='bottom',
    frame_height = 500,
)

print(rgb_plot)  # Helpful to see how the hvplot is constructed
rgb_plot
/env/lib/python3.12/site-packages/dask/core.py:127: RuntimeWarning: divide by zero encountered in divide
  return func(*(_execute_task(a, cache) for a in args))
/env/lib/python3.12/site-packages/dask/core.py:127: RuntimeWarning: invalid value encountered in divide
  return func(*(_execute_task(a, cache) for a in args))
/env/lib/python3.12/site-packages/odc/geo/_rgba.py:56: RuntimeWarning: invalid value encountered in cast
  return x.astype("uint8")
Column
    [0] HoloViews(DynamicMap, widget_location='bottom', widget_type='scrubber')
    [1] WidgetBox(align=('center', 'end'))
        [0] Player(end=24, width=550)
Out[17]:

Export to Geotiffs¶

Recall that to write a dask dataset to a file requires the dataset to be .compute()ed. This may result in a large memory increase on your JupyterLab node if the area of interest is large enough, which in turn may kill the kernel. If so then skip this step, choose a smaller area or find a different way to export data.
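A quick way to gauge the risk before calling .compute() is the array's nbytes attribute, which reports the fully realised in-memory size (for a dask-backed DataArray this is computed from the shape and dtype without triggering any work). A minimal sketch using a plain numpy-backed array of the same shape and dtype as one band above:

```python
# Sketch: estimate in-memory size before computing a dask-backed array.
import numpy as np
import xarray as xr

# One float32 band at the shape loaded above: (time=30, 1500, 1500)
da = xr.DataArray(np.zeros((30, 1500, 1500), dtype="float32"))
size_gib = da.nbytes / 2**30
print(f"{size_gib:.2f} GiB")
```

If the estimate approaches the memory available to your JupyterLab node, reduce the area of interest or export one time layer at a time, as `write_band` below does.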

In [18]:
# Make a directory to save outputs to
target = Path.home() / 'output'
if not target.exists(): target.mkdir()

def write_band(ds, varname):
    """Write the variable name of the xarray dataset to a Geotiff files for each time layer"""
    for i in range(len(ds.time)):
        date = ds[varname].isel(time=i).time.dt.strftime('%Y%m%d').data
        single = ds[varname].isel(time=i).compute()
        single.odc.write_cog(
            fname=f'{target}/example_sentinel-1_{varname}_{date}.tif',
            overwrite=True,
        )
        
write_band(data, 'vv')
write_band(data, 'vh')

Appendix¶

RTC Gamma0 product variants¶

The products listed here differ in the selection and configuration of processing steps and options. The set of SNAP operators conforms with the CEOS Analysis Ready Data specification for normalised radar backscatter.

S1 gamma0 backscatter data are processed from Sentinel-1 GRD scenes using the SNAP-10 Toolbox with Graph Processing Tool (GPT) XML recipes (available on request).

|                                | sentinel1_grd_gamma0_20m | sentinel1_grd_gamma0_10m | sentinel1_grd_gamma0_10m_unsmooth |
|--------------------------------|--------------------------|--------------------------|-----------------------------------|
| **DEM**                        |                          |                          |                                   |
| copernicus_dem_30              | Y                        | Y                        | Y                                 |
| Scene to DEM extent multiplier | 3.0                      | 3.0                      | 3.0                               |
| **SNAP operator**              |                          |                          |                                   |
| Apply-Orbit-File               | Y                        | Y                        | Y                                 |
| ThermalNoiseRemoval            | Y                        | Y                        | Y                                 |
| Remove-GRD-Border-Noise        | Y                        | Y                        | Y                                 |
| Calibration                    | Y                        | Y                        | Y                                 |
| SetNoDataValue                 | Y                        | Y                        | Y                                 |
| Terrain-Flattening             | Y                        | Y                        | Y                                 |
| Speckle-Filter                 | Y                        | Y                        | N                                 |
| Multilook                      | Y                        | Y                        | N                                 |
| Terrain-Correction             | Y                        | Y                        | Y                                 |
| **Output**                     |                          |                          |                                   |
| Projection                     | WGS84, epsg:4326         | WGS84, epsg:4326         | WGS84, epsg:4326                  |
| Pixel resolution               | 20 m                     | 10 m                     | 10 m                              |
| Pixel alignment (PixelIsArea = top-left) | PixelIsArea    | PixelIsArea              | PixelIsArea                       |
In [ ]: